-
Lower-limb exoskeletons have the potential to transform the way we move [1–14], but current state-of-the-art controllers cannot accommodate the rich set of possible human behaviours, which range from cyclic and predictable to transitory and unstructured. We introduce a task-agnostic controller that assists the user on the basis of instantaneous estimates of lower-limb biological joint moments from a deep neural network. By estimating both hip and knee moments in-the-loop, our approach provided multi-joint, coordinated assistance through our autonomous, clothing-integrated exoskeleton. When deployed during 28 activities, spanning cyclic locomotion to unstructured tasks (for example, passive meandering and high-speed lateral cutting), the network accurately estimated hip and knee moments with an average R² of 0.83 relative to ground truth. Further, our approach significantly outperformed a best-case task-classifier-based method constructed from splines and impedance parameters. When tested on ten activities (including level walking, running, lifting a 25 lb (roughly 11 kg) weight and lunging), our controller significantly reduced user energetics (metabolic cost or lower-limb biological joint work, depending on the task) relative to the zero-torque condition, by 5.3 to 19.7%, without any manual controller modifications among activities. Thus, this task-agnostic controller can enable exoskeletons to aid users across a broad spectrum of human activities, a necessity for real-world viability.
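The core idea above, assistance driven directly by instantaneous joint-moment estimates rather than by a task classifier, can be sketched as a simple mapping from estimated moment to commanded torque. This is a minimal illustration, not the authors' controller: the function name, gain, and torque limit are assumptions for the sketch.

```python
def assistance_torque(estimated_moment_nm_kg, body_mass_kg,
                      gain=0.2, torque_limit_nm=30.0):
    """Map an estimated biological joint moment (Nm/kg, from a neural
    network running in-the-loop) to a commanded exoskeleton torque (Nm).

    The same mapping applies regardless of activity, which is what makes
    the scheme task-agnostic; the gain and clip limit here are
    illustrative placeholders.
    """
    torque = gain * estimated_moment_nm_kg * body_mass_kg
    # Clip to a hypothetical actuator limit.
    return max(-torque_limit_nm, min(torque_limit_nm, torque))
```

Because the command depends only on the current moment estimate, no per-activity parameters need to be switched in or out.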
-
Robotic lower-limb exoskeletons can augment human mobility, but current systems require extensive, context-specific considerations, limiting their real-world viability. Here, we present a unified exoskeleton control framework that autonomously adapts assistance on the basis of instantaneous user joint-moment estimates from a temporal convolutional network (TCN). When deployed on our hip exoskeleton, the TCN achieved an average root mean square error of 0.142 newton-meters per kilogram across 35 ambulatory conditions without any user-specific calibration. Further, the unified controller significantly reduced user metabolic cost and lower-limb positive work during level-ground and incline walking compared with walking without wearing the exoskeleton. This advancement bridges the gap between in-lab exoskeleton technology and real-world human ambulation, making exoskeleton control technology viable for a broad community.
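The RMSE figure quoted above (and the R² values used throughout these abstracts) are standard regression metrics; a plain-Python sketch of both, assuming mass-normalized moment sequences as inputs:

```python
import math

def rmse(estimates, ground_truth):
    """Root mean square error, e.g. in Nm/kg for mass-normalized moments."""
    n = len(estimates)
    return math.sqrt(sum((e - g) ** 2 for e, g in zip(estimates, ground_truth)) / n)

def r_squared(estimates, ground_truth):
    """Coefficient of determination relative to the ground-truth mean."""
    mean_g = sum(ground_truth) / len(ground_truth)
    ss_res = sum((g - e) ** 2 for e, g in zip(estimates, ground_truth))
    ss_tot = sum((g - mean_g) ** 2 for g in ground_truth)
    return 1.0 - ss_res / ss_tot
```

Perfect estimates give an RMSE of 0 and an R² of 1; the reported 0.142 Nm/kg would be the `rmse` value averaged over conditions.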
-
Objective: Real-time measurement of biological joint moment could enhance clinical assessments and generalize exoskeleton control. Accessing joint moments outside clinical and laboratory settings requires harnessing non-invasive wearable sensor data for indirect estimation. Previous approaches have been validated primarily during cyclic tasks, such as walking, but these methods are likely limited when translating to non-cyclic tasks, where the mapping from kinematics to moments is not unique. Methods: We trained deep learning models to estimate hip and knee joint moments from kinematic sensors, electromyography (EMG), and simulated pressure insoles using a dataset of 10 cyclic and 18 non-cyclic activities. We assessed estimation error for combinations of sensor modalities during both activity types. Results: Compared to the kinematics-only baseline, adding EMG reduced RMSE by 16.9% at the hip and 30.4% at the knee (p<0.05), and adding insoles reduced RMSE by 21.7% at the hip and 33.9% at the knee (p<0.05). Adding both modalities reduced RMSE by 32.5% at the hip and 41.2% at the knee (p<0.05), significantly more than either modality individually (p<0.05). All sensor additions improved model performance on non-cyclic tasks more than on cyclic tasks (p<0.05). Conclusion: These results demonstrate that adding kinetic sensor information through EMG or insoles improves joint moment estimation, both individually and jointly. These additional modalities matter most during non-cyclic tasks, which reflect the variable and sporadic nature of the real world. Significance: Improved joint moment estimation and task generalization are pivotal to developing wearable robotic systems capable of enhancing mobility in everyday life.
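One practical step implied by the multimodal comparison above is assembling time-aligned sensor streams into a single model input. A minimal sketch with NumPy (the function name and channel counts are assumptions, not the paper's pipeline):

```python
import numpy as np

def build_input_window(kinematics, emg, insole):
    """Stack time-aligned sensor streams into one model input window.

    Each argument has shape [T, channels] over the same T timesteps;
    the result has shape [T, total_channels]. Dropping an argument's
    columns corresponds to the ablations compared in the study
    (kinematics-only vs. +EMG vs. +insoles vs. both).
    """
    assert kinematics.shape[0] == emg.shape[0] == insole.shape[0], \
        "streams must be time-aligned"
    return np.concatenate([kinematics, emg, insole], axis=1)
```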
-
Estimating human joint moments using wearable sensors has utility for personalized health monitoring and generalized exoskeleton control. Data-driven models have the potential to map wearable sensor data to human joint moments, even with a reduced sensor suite and without subject-specific calibration. In this study, we quantified the RMSE and R² of a temporal convolutional network (TCN) trained to estimate human hip moments in the sagittal plane using exoskeleton sensor data (i.e., a hip encoder and thigh- and pelvis-mounted inertial measurement units). We conducted three analyses in which we iteratively retrained the network while: 1) varying the input sequence length of the model, 2) incorporating noncausal data into the input sequence, thus delaying the network estimates, and 3) time-shifting the labels to train the model to anticipate (i.e., predict) human hip moments. We found that 930 ms of causal input data maintained model performance while minimizing input sequence length (validation RMSE and R² of 0.141±0.014 Nm/kg and 0.883±0.025, respectively). Further, delaying the model estimate by up to 200 ms significantly improved model performance compared to the best causal estimators (p<0.05), improving estimator fidelity in use cases where delayed estimates are acceptable (e.g., in personalized health monitoring or diagnoses). Finally, we found that anticipating hip moments further in time linearly increased model RMSE and decreased R² (p<0.05); however, performance remained strong (R² > 0.85) when predicting up to 200 ms ahead.
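The causal/delayed/anticipatory variants above all come down to how input windows are paired with labels. A sketch of that pairing, assuming simple array-valued signals (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def make_windows(signal, labels, window_len, label_shift):
    """Pair causal input windows with time-shifted labels.

    Each window covers samples [t - window_len + 1, t]. With
    label_shift = 0 the model estimates the current moment;
    label_shift > 0 trains it to anticipate future moments;
    label_shift < 0 delays the estimate, letting noncausal data
    inform it.
    """
    X, y = [], []
    for t in range(window_len - 1, len(signal)):
        j = t + label_shift
        if 0 <= j < len(labels):
            X.append(signal[t - window_len + 1 : t + 1])
            y.append(labels[j])
    return np.array(X), np.array(y)
```

At a 200 Hz sample rate, the 930 ms window reported above would correspond to `window_len = 186` samples, and a 200 ms shift to `label_shift = ±40`.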
-
Autonomous lower-limb exoskeletons must modulate assistance based on locomotion mode (e.g., ramp or stair ascent) to adapt to the corresponding changes in human biological joint dynamics. However, current mode classification strategies for exoskeletons often require user-specific tuning, have a slow update rate, and rely on additional sensors outside of the exoskeleton sensor suite. In this study, we introduce a deep convolutional neural network-based locomotion mode classifier for hip exoskeleton applications, built on an open-source gait biomechanics dataset with various wearable sensors. Our approach removes the limitations of previous systems: it is 1) subject-independent (i.e., requires no user-specific data), 2) capable of continuous classification for smooth and seamless mode transitions, and 3) reliant only on minimal wearable sensors native to a conventional hip exoskeleton. We optimized our model over several factors contributing to overall performance, such as transition label timing, model architecture, and sensor placement, providing a holistic understanding of mode classifier design. Our optimized deep learning model showed a 3.13% classification error (steady-state: 0.80 ± 0.38% and transitional: 6.49 ± 1.42%), outperforming other machine learning-based benchmarks commonly used in the field (p<0.05). Furthermore, our multi-modal analysis indicated that our model can maintain high performance in different settings, such as unseen stair or ramp slopes. Thus, our study presents a novel locomotion mode framework capable of advancing robotic exoskeleton applications toward assisting community ambulation.
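Continuous frame-by-frame classification, as described above, is often paired with some form of output smoothing so a single misclassified frame does not flip the assistance mode. The paper does not specify its smoothing, so the majority-vote filter below is purely an illustrative stand-in:

```python
from collections import Counter, deque

def smooth_modes(frame_predictions, window=3):
    """Majority-vote filter over the most recent frame-wise mode
    predictions, suppressing spurious single-frame mode switches.

    This is an illustrative post-processing step, not the classifier
    described in the study.
    """
    history = deque(maxlen=window)
    smoothed = []
    for p in frame_predictions:
        history.append(p)
        smoothed.append(Counter(history).most_common(1)[0][0])
    return smoothed
```

The trade-off is latency: a larger `window` rejects more noise but delays genuine mode transitions by up to `window // 2` frames.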
-
Detecting the user's walking is a critical part of exoskeleton technology for fully automating smooth and seamless assistance during movement transitions. Researchers have taken several approaches to developing walk detection systems using different kinds of sensors; however, few existing solutions can detect these transitions using only the sensors embedded on a robotic hip exoskeleton (i.e., hip encoders and a trunk IMU), a critical consideration for implementing these systems in-the-loop of a hip exoskeleton controller. As a solution, we explored and developed two walk detection models that implement a finite state machine, switching between walking and standing states using two transition conditions: stand-to-walk and walk-to-stand. One of our models dynamically detected the user's gait cycle using two hip encoders and an IMU; the other used only the two hip encoders. Our models were developed using a publicly available dataset and validated online using a wearable sensor suite containing sensors commonly embedded on robotic hip exoskeletons. The two models were then compared with a foot-contact estimation method, which served as a baseline for evaluation. The results of our online experiments validated the performance of our models, with delay times of 274 ms and 507 ms for the HIP+IMU and HIP ONLY models, respectively. Therefore, the walk detection models established in our study achieve reliable performance under multiple locomotive contexts without manual tuning or sensors beyond those commonly implemented on robotic hip exoskeletons.
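The two-state finite state machine described above can be sketched as follows. The velocity-based transition conditions and thresholds here are illustrative assumptions; the paper's actual stand-to-walk and walk-to-stand conditions are derived from the hip encoders (and, in one model, the trunk IMU):

```python
class WalkDetector:
    """Minimal two-state FSM (standing/walking), a simplified stand-in
    for the walk detection models described in the study."""

    def __init__(self, walk_threshold=0.5, stand_threshold=0.1):
        # Hysteresis: entering "walking" requires more hip motion than
        # staying in it, preventing chatter near the boundary.
        self.state = "standing"
        self.walk_threshold = walk_threshold
        self.stand_threshold = stand_threshold

    def update(self, hip_velocity_rad_s):
        speed = abs(hip_velocity_rad_s)
        if self.state == "standing" and speed > self.walk_threshold:
            self.state = "walking"    # stand-to-walk transition
        elif self.state == "walking" and speed < self.stand_threshold:
            self.state = "standing"   # walk-to-stand transition
        return self.state
```

Because only two transitions exist, the detector cannot land in an ambiguous mode, which is what makes the FSM formulation attractive for in-the-loop control.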
-
Step length is a critical gait parameter that allows quantitative assessment of gait asymmetry. Gait asymmetry can lead to many potential health threats, such as joint degeneration, impaired balance control, and gait inefficiency. Therefore, accurate step length estimation is essential for understanding gait asymmetry and providing appropriate clinical interventions or gait training programs. The conventional method for step length measurement relies on foot-mounted inertial measurement units (IMUs); however, this may not be suitable for real-world applications due to sensor signal drift and the potential obtrusiveness of distal sensors. To overcome this challenge, we propose a deep convolutional neural network-based step length estimator that uses only proximal wearable sensors (hip goniometer, trunk IMU, and thigh IMU) and generalizes across walking speeds. To evaluate this approach, we utilized treadmill data collected from sixteen able-bodied subjects at different walking speeds and tested our optimized model on overground walking data. Our CNN model estimated step length with an average mean absolute error of 2.89 ± 0.89 cm across all subjects and walking speeds. Since wearable sensors and CNN models are easily deployable in real time, our findings can enable personalized, real-time step length monitoring in wearable assistive devices and gait training programs.
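The 2.89 cm figure above is a mean absolute error over steps; for reference, the metric itself is simply:

```python
def mean_absolute_error(estimated_cm, ground_truth_cm):
    """Mean absolute error between estimated and measured step lengths (cm)."""
    n = len(estimated_cm)
    return sum(abs(e - g) for e, g in zip(estimated_cm, ground_truth_cm)) / n
```

Unlike RMSE, MAE weights all errors linearly, so occasional large step-length misestimates are penalized less heavily.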
